Right now, we have an application ready and set up, but nothing on screen. Or, more exactly, we only have a constant color filling the whole window. It is time to change that!
In this tutorial, we will see how to bring a 3D mesh to the screen, either by creating it directly through the API or by loading it from an external file.
All code given here should be done before entering the run loop.
Let's start by creating that mesh from code. This gives us maximum control over what it will look like.
First, let's include everything necessary:
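Assuming the engine's headers follow a NilkinsGraphics/<Module>/<Class>.h convention (the exact paths may differ in your setup):

```cpp
#include <NilkinsGraphics/Meshes/Mesh.h>
#include <NilkinsGraphics/Meshes/MeshManager.h>
```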
Now, we can create the mesh itself:
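A minimal sketch, assuming the manager is reached through a getInstance accessor:

```cpp
// Ask the manager for a mesh named "triangle", creating it if it does not exist yet
nkGraphics::Mesh* mesh = nkGraphics::MeshManager::getInstance()->createOrRetrieve("triangle");
```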
This is a pattern you will often find in the component, and in the engine in general. Here, the MeshManager is responsible for all memory allocations concerning meshes. Creating a resource goes through the createOrRetrieve function, which creates the resource if it is not available yet, or retrieves the existing one otherwise.
This pattern has some benefits: memory ownership stays centralized within the manager, and a resource can be shared anywhere in the code base simply by querying it by name.
Now that the mesh is created, we can set it up. There are different ways to do so, allowing for either maximum control or carefree setup. Let's first see how to use the standard mesh API, by looking at how data can be provided through it.
The first step is to shape our data as we see fit. In this example, we will use a simple buffer of vectors (4 contiguous floats), each entry describing a vertex's position.
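For instance (the exact coordinates are illustrative, placing the triangle a bit in front of the default camera):

```cpp
// One vertex per line: x, y, z, w
std::vector<float> vertices
{
    -1.f, -1.f, 5.f, 1.f,
     0.f,  1.f, 5.f, 1.f,
     1.f, -1.f, 5.f, 1.f
};
```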
Our data describes a simple triangle. This is the CPU-side data we want to provide to the mesh's vertex buffer.
Having the data ready is good, but we also need to tell the mesh how to use it.
For this purpose, we need a MeshInputLayout, composed of one or more MeshInputLayoutAttributes.
The input layout allows the engine to link what the mesh has to offer to a Program's code (HLSL or GLSL), enabling correct usage and interpretation of the given data.
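As a sketch, under assumed member and method names, building such a layout could look like:

```cpp
nkGraphics::MeshInputLayout inputLayout;

// Describe one attribute: the vertex positions
nkGraphics::MeshInputLayoutAttribute positionAttribute;
positionAttribute._semanticName = "POSITION"; // Hypothetical member, holding the semantic name shaders will look for

inputLayout.addAttribute(positionAttribute); // Hypothetical method; the order of addition matters
```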
In this case, our layout is formed by a single attribute, representing the position of the vertices.
One important notion here is that an attribute has a name, called its semantic name, which acts as a unique identifier for it.
In HLSL, this semantic name is always specified next to the attribute declaration.
In GLSL, for nkGraphics, this semantic name is the declaration name of the attribute.
Having all of this aligned will ensure all data is correctly interpreted.
Here, we choose 'POSITION' as its semantic name, which is the default name nkGraphics will look for in its built-in shader programs for positions.
nkGraphics will sometimes require some attributes to represent something (positions, normals...).
In such cases, the input layout can set specific names through annotations. Please see the documentation for more information on that matter.
To conclude on the layout: the attribute does not require anything different from the defaults. When not overloaded in the attribute description, the attribute offset and buffer stride are computed automatically from the input layout. Beware that this depends on the order in which attributes are added to the layout: attributes fed first are considered first in the buffer they are tied to.
The last step is to let the mesh know about the data, the layout, and how many points there are inside (this drives the drawing commands).
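A possible shape for this step, under assumed setter names (the forwarding variant used here is explained right after):

```cpp
mesh->setInputLayout(inputLayout); // Hypothetical setter for the layout described above
mesh->setVertexBufferForward(std::move(vertices)); // Hypothetical: hands the buffer over without copying it
mesh->setVertexCount(3); // Hypothetical: 3 vertices compose our triangle
```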
One interesting point is that a Mesh offers different ways to provide data to it.
Data can be copied over, forwarded, or simply referenced.
Depending on the lifetime and usage of your data, you can choose either freely.
In this case, we know we won't use the buffer's data afterwards, so we choose to forward the data to the mesh, to avoid copying data over.
Note that copying would also work in this case, although it would be wasteful.
Referencing, however, would be dangerous: the mesh loads asynchronously, and our buffer would free its data when going out of scope.
Let's work towards the next step:
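Sticking to our triangle, the index data could be as simple as (illustrative):

```cpp
std::vector<unsigned int> indices {0, 2, 1};
```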
In this example, we will provide index data. This is optional, especially in this case, as non-indexed vertex data is simply read as a list of vertices, interpreted according to their topology (a triangle list here). But for demonstration purposes, we will go with it.
Note how we index by swapping the vertices at indices 1 and 2, to account for the counter-clockwise front.
This is the default for the rasterization state when drawing.
What is left is to forward this buffer, which we won't reuse, and set the index count for drawing:
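Continuing the sketch with assumed setter names:

```cpp
mesh->setIndexBufferForward(std::move(indices)); // Hypothetical setter, mirroring the vertex buffer one
mesh->setIndexCount(3); // Hypothetical: 3 indices to draw
```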
Finally, the mesh can be loaded. Here is another pattern you will often encounter in the engine: resources are always set up first, then explicitly loaded. Such a pattern allows a resource to be fully parameterized before any heavy work happens, and to be altered and reloaded at will. With that in mind, a last call has to be made:
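A sketch of that call, assuming the entry point is simply named load:

```cpp
bool loaded = mesh->load(); // Reports success through its return value
```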
Now our mesh is ready to be used! If the loading step fails, this method will return false and log what went wrong.
The basic API allows for a lot of control, providing all the flexibility you could need when designing your data formats. However, sometimes you just want a fast and easy way to set up a mesh. This is where MeshUtils can help you.
Let's include what we need from it:
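Assuming the same header convention as earlier:

```cpp
#include <NilkinsGraphics/Meshes/MeshUtils.h>
```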
We can then use high-level structures to fill in what we want the mesh to offer. Keep in mind that this data will be translated into lower-level buffers and an input layout, meaning there is some overhead to using it. Let's see how to work with it:
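A sketch under assumed member names, rebuilding our triangle:

```cpp
std::vector<nkGraphics::VertexData> vertexData (3);
vertexData[0]._position = nkMaths::Vector3(-1.f, -1.f, 5.f); // Hypothetical member and math type
vertexData[1]._position = nkMaths::Vector3(0.f, 1.f, 5.f);
vertexData[2]._position = nkMaths::Vector3(1.f, -1.f, 5.f);
```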
We create a buffer of VertexData, each entry representing one vertex.
This structure then has members you can feed with the data you need.
To draw a parallel with the last example, we will build the same triangle using this approach.
This means we feed the position attribute of the structure, for each vertex.
Then, by filling the VertexComposition structure, we provide information about what the data translation should take into account:
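Which could look like this, with an entirely assumed flag name:

```cpp
nkGraphics::VertexComposition composition;
composition._hasPositions = true; // Hypothetical flag: only positions are fed in our case
```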
With both objects ready, we can call MeshUtils::packIntoMeshData, which will translate our list of points and the requested composition into a binary buffer and an input layout.
This packed data can then be used to feed the mesh, without having to worry about alignment or format issues.
Data can be freed using the same forward operation in this case: we won't reuse the generated data, so this is perfectly safe.
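Put together, assuming the function returns a structure pairing the packed buffer with its matching layout:

```cpp
nkGraphics::MeshData meshData = nkGraphics::MeshUtils::packIntoMeshData(vertexData, composition); // Hypothetical return type

mesh->setInputLayout(meshData._inputLayout); // Hypothetical members
mesh->setVertexBufferForward(std::move(meshData._data));
mesh->setVertexCount(3);
```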
Remaining operations (index buffer, loading) can be done exactly the same way as in our first example.
This operation only translates vertices into a compact version usable by the mesh. Indexing, topology and such can still be freely altered.
Now that the mesh is ready, we need to tell the component that we want it painted during the rendering step. For that, we need to tinker with render queues.
A render queue is exactly what its name implies: it queues objects that need to be rendered. Queues are used within the passes composing the final image. We will cover this in a later tutorial, so for now let's focus on the render queue itself. First, include:
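Assuming the same header convention:

```cpp
#include <NilkinsGraphics/Entities/Entity.h>
#include <NilkinsGraphics/RenderQueues/RenderQueue.h>
#include <NilkinsGraphics/RenderQueues/RenderQueueManager.h>
```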
With those includes, we can start messing with the render queues:
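A sketch of the manipulation described below; accessor and constant names are assumptions:

```cpp
// Get the default queue through its statically accessible name
nkGraphics::RenderQueue* queue = nkGraphics::RenderQueueManager::getInstance()->get(nkGraphics::RenderQueueManager::DEFAULT_QUEUE); // Hypothetical

// Enqueue an entity and point it to our mesh
nkGraphics::Entity* entity = queue->addEntity(); // Hypothetical
entity->setRenderInfo(nkGraphics::EntityRenderInfo(mesh, nullptr)); // Hypothetical setter; nullptr requests a default shader
```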
Here, we get the queue provided by default by the component, using its statically accessible name within the manager. This is the queue used by the default image composition when painting the scene. As a result, by altering it, we change what is rendered right away.
What happens here is that we add an "entity" to the render queue. An entity represents an object enqueued for rendering.
On it, we will be able to set all the information we need.
An entity gets what it needs to render through the EntityRenderInfo.
This information allows defining groups and their LODs (Levels Of Detail), but this will be introduced at a later time.
For now, we simply use the shortcut constructor requesting the entity to render a mesh with a given shader.
The shader being nullptr in this case, the component will use one of the default shaders it offers, depending on the mesh's layout.
With all of that, we are ready to give our program a new run. Let's see what it looks like:
To recap, for a mesh, we need to:
- Create it through the MeshManager
- Provide its vertex data, along with an input layout describing it
- Optionally provide index data
- Set the vertex and index counts driving the drawing commands
- Load it
- Add an entity referencing it into a render queue
By respecting all these steps, a mesh can easily be part of the rendering pipeline.
Creating our own mesh from scratch can be useful in some cases, but in others we might want to load an already prepared mesh from a file. Let's load a sphere from an obj file, which you will find within the release's Data folder (Meshes/sphere.obj).
Also, keep in mind the folder given to the ResourceManager. As a reminder, the working path set within it is considered the root folder to use.
So let's pick things back up after the mesh's creation from the manager. The only step we need is:
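A sketch, assuming a setter pointing the mesh to a file before loading it:

```cpp
mesh->setFromFile("Meshes/sphere.obj"); // Hypothetical setter: path relative to the ResourceManager's working path
mesh->load();
```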
Loading the mesh, provided the file can be found from the root path, will read the file data and feed the mesh with it automatically.
For that, it will query the CompositeEncoder with the sources found, using default decoding options.
If successfully parsed, the mesh will use the first DecodedMeshData found to automatically create its vertex buffers.
And with all of that, the sphere should be ready.
However, in this case, we want more control. The linked mesh has been exported with texture coordinates inverted on the Y axis, and with -Z as the front, altering the winding order within the triangles. Let's see how we can address these issues. Let's include:
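Again assuming the header convention (plus the standard headers we will need to read the file):

```cpp
#include <NilkinsGraphics/Encoders/ObjEncoder.h>

#include <fstream>
#include <iterator>
```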
The ObjEncoder is the class that will be able to interpret the Obj format and create mesh information from it.
Let's see how we can use it to load the data, while altering the way it behaves:
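One possible shape for this logic; every structure, member, and helper name below is an assumption reconstructed from the description that follows:

```cpp
// Resolve the file against the working path and read it into memory
std::string absolutePath = nkGraphics::ResourceManager::getInstance()->getAbsoluteFromWorkPath("Meshes/sphere.obj"); // Hypothetical helper

std::ifstream file (absolutePath, std::ios::binary);
std::vector<char> fileData ((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());

// Prepare the decoding options to match how the file was exported
nkGraphics::ObjDecodeOptions options; // Hypothetical structure
options._invertTexCoordY = true; // Hypothetical flag: texture coordinates were exported Y-inverted
options._invertWindingOrder = true; // Hypothetical flag: the -Z front flips the winding order

// Decode: an obj file can hold many groups, hence many meshes
std::vector<nkGraphics::DecodedMeshData> decodedData = nkGraphics::ObjEncoder::decode(fileData, options); // Hypothetical signature

// This file holds a single mesh: fill ours from the first entry, forwarding data and loading right away
nkGraphics::MeshFillOptions fillOptions; // Hypothetical structure
fillOptions._forwardData = true;
fillOptions._autoLoad = true;
nkGraphics::MeshUtils::fillMeshFromDecodedData(decodedData[0], fillOptions, mesh); // Hypothetical signature
```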
The first thing we need is the data the encoder will interpret.
For that, we use the ResourceManager to retrieve the absolute path, and load the data into a buffer.
Then, we prepare some options for the encoder to alter the way it imports the data.
To account for the texture coordinates, we ask the encoder to invert the Y axis.
Then, the -Z front axis is a problem for our default winding order, so we request the encoder to invert the winding order, fixing the direction the surfaces are facing.
Once all of that is set up, we can request the parsing of the data we retrieved, with the options requested.
Decoding can provide more than one mesh: obj files can have many groups, and future formats can expose many primitives or meshes too.
As such, it is important to know which mesh you want to import, or to prepare logic to cope with that.
In any case, we know that this file contains one mesh, so we will use the first entry.
We could interpret all the information from the DecodedMeshData structure ourselves, but there is also a utility function, MeshUtils::fillMeshFromDecodedData, that we use here.
As for decoding, there are options we can tune for the filling. We choose to let it forward the data to the mesh (which means our structure will be altered) and load it for us once ready.
With this, we can alter how the mesh is decoded. This can prove useful to work with different exporters without having to reprocess the data.
So now, we have altered the decoding logic to cope with our local space, decoded the obj data, filled the mesh from the decoded information, and automatically loaded it. Let's see what we currently have on screen:
The current image is not very encouraging, but fear not! This is normal. First, ensure that the MeshLoader is not complaining that the file cannot be loaded. If it is, check that the path you give is correct and relative to the working path set within the ResourceManager.
Then, if nothing is logged, the mesh has been successfully loaded and is being displayed. But why aren't we seeing anything?
You may have guessed it: our mesh is simply centered right where our camera is, at (0, 0, 0). As a result, we are seeing the sphere from the inside. Its back faces being culled away by default, the component doesn't render anything.
The way to correct that is to move it, through the use of a node graph. Each Entity can be linked to a Node, which will give its position, orientation, and scale within a scene graph. By default, an Entity is not tied to any node, which means its world coordinates correspond to its model coordinates.
Let's change that, and include:
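Assuming the header convention once more:

```cpp
#include <NilkinsGraphics/Nodes/Node.h>
#include <NilkinsGraphics/Nodes/NodeManager.h>
```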
With all of this, we can manipulate nodes, like this:
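A sketch matching the steps detailed below; signatures are assumptions:

```cpp
// Request a node from its manager
nkGraphics::Node* node = nkGraphics::NodeManager::getInstance()->create("sphereNode"); // Hypothetical signature

// Place it 10 units in front of the camera
node->setPositionAbsolute(nkMaths::Vector3(0.f, 0.f, 10.f)); // Hypothetical setter and math type

// Assign it to the entity so it drives its transform
entity->setNode(node); // Hypothetical setter
```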
First, request a creation from the manager.
Then, the node is translated within the world, as we set its absolute position: 10 units in front of the camera.
The node is then assigned to the entity, so that it can transform its positioning. Let's see the effect of those lines:
Now our sphere is set further away from the camera, and we can witness it in its glorious shape, from outside. Using the node graph is optional, but when you need to move objects around, it is the way to go.
And this covers the basic interactions with meshes!